
Loading and working with experimental data

import x_xy

import jax
import jax.numpy as jnp
import numpy as np
import matplotlib.pyplot as plt

import mediapy as media

def show_video(sys, xs: x_xy.Transform) -> None:
    assert sys.dt == 0.01
    # only render every fourth to get a framerate of 25 fps
    frames = x_xy.render(sys, [xs[i] for i in range(0, xs.shape(), 4)], camera="c",
                         height=480, width=640, add_cameras={-1: '<camera name="c" mode="targetbody" target="3" pos=".5 -.5 1.25"/>'})
    # convert rgba to rgb
    frames = [frame[..., :3] for frame in frames]
    media.show_video(frames, fps=25)
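The factor of four in the comment above follows directly from the sampling rate: with sys.dt = 0.01 s the system is sampled at 100 Hz, so keeping every fourth sample yields 100 / 4 = 25 frames per second. A minimal sanity check of that arithmetic:

```python
dt = 0.01         # system time step in seconds, i.e. 100 Hz
target_fps = 25   # desired video framerate

# keep every `step`-th sample so playback at target_fps runs in real time
step = round(1 / (dt * target_fps))
print(step)  # -> 4
```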

Experimental data and system definitions of the experimental setup are located in the sub-package exp.

from x_xy.subpkgs import exp

Multiple experimental trials will be made available. Right now only the trials with the identifiers S_04 and S_06 are available.

exp_id = "S_06"
sys = exp.load_sys(exp_id)

Let's first take a look at the system that was used in the experiments.

state = x_xy.State.create(sys)
# update the maximal coordinates
xs = x_xy.forward_kinematics(sys, state)[1].x.batch()
show_video(sys, xs)
Rendering frames..: 100%|██████████| 1/1 [00:00<00:00, 48.62it/s]
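The call to forward_kinematics above converts the minimal coordinates stored in state (one value per joint degree of freedom) into maximal coordinates, i.e. the world-frame pose of every link. The idea can be illustrated for a planar two-link chain; this is a plain-NumPy sketch of the concept, not the library's implementation:

```python
import numpy as np

def planar_forward_kinematics(angles, lengths):
    """Minimal coords (joint angles) -> maximal coords (link endpoint positions)."""
    positions = []
    theta, origin = 0.0, np.zeros(2)
    for angle, length in zip(angles, lengths):
        theta += angle  # accumulate rotation along the chain
        origin = origin + length * np.array([np.cos(theta), np.sin(theta)])
        positions.append(origin.copy())
    return np.stack(positions)

# two links of 0.2 m (the segments here are 20 cm long), both joints at 90 degrees
pos = planar_forward_kinematics([np.pi / 2, np.pi / 2], [0.2, 0.2])
```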

As you can see, a five-segment kinematic chain was moved, and for each segment both IMU measurements and OMC (optical motion capture) ground truth are available.

Let's load this (non-simulated) IMU and OMC data.

# `canonical` is the identifier of the first motion pattern performed in this trial
# `shaking` is the identifier of the last motion pattern performed in this trial
data = exp.load_data(exp_id, motion_start="canonical", motion_stop="shaking")
data.keys()
dict_keys(['seg1', 'seg2', 'seg3', 'seg4', 'seg5'])
data["seg1"].keys()
dict_keys(['imu_flex', 'imu_rigid', 'marker1', 'marker2', 'marker3', 'marker4', 'quat'])
data["seg1"]["imu_rigid"].keys()
dict_keys(['acc', 'gyr', 'mag'])

The quaternion quat is to be interpreted as the rotation from the segment frame to an arbitrary OMC inertial frame.

The position marker1 is to be interpreted as the position vector from arbitrary OMC inertial frame to a specific marker (marker 1) on the respective segment (vector given in the OMC inertial frame).
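Given these conventions, a vector can be moved between the OMC inertial frame and a segment frame by rotating it with quat (or its conjugate, depending on the active/passive convention). A minimal Hamilton-convention rotation sketch in plain NumPy; the function name and the (w, x, y, z) storage order are assumptions for illustration, not taken from the library:

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z)."""
    w, xyz = q[0], q[1:]
    t = 2.0 * np.cross(xyz, v)
    return v + w * t + np.cross(xyz, t)

# 90 degree rotation about the z-axis maps the x-axis onto the y-axis
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
v = np.array([1.0, 0.0, 0.0])
v_rot = quat_rotate(q, v)  # ~ (0, 1, 0)
```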

Additionally, two IMUs are attached to each segment: one rigidly, and one non-rigidly (via foam).

Also, how long is the trial?

data["seg1"]["marker1"].shape
(32500, 3)

It's 325 seconds of data.
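The duration follows from the number of samples and the 100 Hz sampling rate (sys.dt = 0.01 s):

```python
n_samples = 32500     # leading dimension of data["seg1"]["marker1"]
sampling_rate = 100   # Hz, since sys.dt == 0.01 s
duration_s = n_samples / sampling_rate
print(duration_s)     # -> 325.0
```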

Let's take a look at the motion of the whole trial.

To render it, we need maximal coordinates xs of all links in the system. Luckily, there exists a function inside sub-package sim2real that does this already (but please check out the source).

from x_xy.subpkgs import sim2real

# this function is required because there are four markers per segment, i.e. four
# candidate positions for the frame origin of that segment. At the bottom of the
# xml string there are comments that define which marker should be used for which
# segment (the marker closest to that segment's frame origin)
link_name_pos_rot = exp.link_name_pos_rot_data(data, exp.load_xml_str("S_06"))

xs = sim2real.xs_from_raw(sys, link_name_pos_rot, qinv=True)
show_video(sys, xs)
Rendering frames..: 100%|██████████| 8125/8125 [00:57<00:00, 142.21it/s]
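The qinv=True flag indicates that the stored quaternions point in the opposite direction from what xs_from_raw expects, so they are inverted first. For unit quaternions the inverse is simply the conjugate. A self-contained plain-NumPy sketch of that operation, again assuming (w, x, y, z) storage order:

```python
import numpy as np

def quat_conj(q):
    """Inverse of a unit quaternion q = (w, x, y, z): negate the vector part."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def quat_mul(a, b):
    """Hamilton product a * b."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])  # rotation about the x-axis
identity = quat_mul(q, quat_conj(q))                # ~ (1, 0, 0, 0)
```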

Perfect. This is a rendered animation of the real experimental motion that was performed. You can see that the spacing between segments is not perfect.

This is because in our idealized system model joints have no spatial dimension, but in reality they do. The entire setup is 3D-printed, and the joints are several centimeters long.

The segments are 20 cm long.

We can use this experimental data to validate our simulated approaches or validate ML models that are learned on simulated training data.